    CNN color demosaicking generalizes for any CFA

    No full text
    A convolutional neural network is trained in auto-/heteroassociative mode to reconstruct the RGB components of a randomly mosaicked color image. The trained network was shown to perform equally well whether images are sampled periodically or with a different random mosaic, so the model generalizes to every type of color filter array (CFA). We attribute this universal demosaicking property to the network learning the statistical structure of color images independently of the mosaic pattern arrangement.
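A minimal sketch of the sampling setup described above, under the assumption that a CFA can be represented as a per-pixel channel mask (the network architecture and training details are not specified here): a random CFA keeps one color channel per pixel, and a periodic Bayer CFA can be built the same way for the generalization test.

```python
import numpy as np

def random_mosaic(rgb, rng):
    """Random CFA: keep exactly one of the three color channels per pixel."""
    h, w, _ = rgb.shape
    choice = rng.integers(0, 3, size=(h, w))  # which channel survives
    mask = np.zeros_like(rgb)
    mask[np.arange(h)[:, None], np.arange(w)[None, :], choice] = 1.0
    return rgb * mask, mask

def bayer_mosaic(rgb):
    """Periodic RGGB Bayer CFA, used here only to test generalization
    of a network trained on random mosaics."""
    mask = np.zeros_like(rgb)
    mask[0::2, 0::2, 0] = 1.0  # red
    mask[0::2, 1::2, 1] = 1.0  # green
    mask[1::2, 0::2, 1] = 1.0  # green
    mask[1::2, 1::2, 2] = 1.0  # blue
    return rgb * mask, mask
```

An auto-/heteroassociative network would then be trained to map the masked image back to the full RGB image; the claim above is that training on random mosaics alone is sufficient for both the random and the periodic case.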

    A study of the Dream Net model robustness across continual learning scenarios

    No full text
    Continual learning is one of the major challenges of deep learning. For decades, many studies have proposed efficient models that overcome catastrophic forgetting when learning new data. However, because they focused on achieving the best forgetting-reduction performance, these studies have moved away from real-life applications, where algorithms need to adapt to changing environments and perform well regardless of how data arrives. There is therefore a growing need to define new scenarios that assess the robustness of existing methods with those challenges in mind. The issue of data availability during training is another essential point in the development of solid continual learning algorithms. Depending on the streaming formulation, the model may, in the most extreme scenarios, need to adapt to new data as soon as it arrives, without the possibility of reviewing it afterwards. In this study, we propose a review of existing continual learning scenarios and their associated terms. These existing terms and definitions are synthesized in an atlas to provide a better overview. Based on two of the main categories defined in the atlas, "Class-IL" and "Domain-IL", we define eight scenarios with data streams of varying complexity that allow testing a model's robustness under changing data-arrival conditions. We evaluate Dream Net-Data Free, a privacy-preserving continual learning algorithm, in each proposed scenario and demonstrate that this model is robust enough to succeed in every one, regardless of how the data is presented. We also show that it is competitive with other continual learning algorithms from the literature that are not privacy-preserving, which is a clear advantage for real-life human-centered applications.
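As an illustration of the scenario taxonomy, a Class-IL stream presents tasks whose class sets are disjoint. The helper below is a hypothetical sketch (not from the paper) of how such a stream can be built from a labeled dataset:

```python
from collections import defaultdict

def class_il_stream(samples, classes_per_task):
    """Split labeled (x, y) samples into a Class-IL stream: each task
    introduces a disjoint set of new classes, seen once and never revisited."""
    by_label = defaultdict(list)
    for x, y in samples:
        by_label[y].append((x, y))
    labels = sorted(by_label)
    for i in range(0, len(labels), classes_per_task):
        task_labels = labels[i:i + classes_per_task]
        yield [ex for lab in task_labels for ex in by_label[lab]]
```

A Domain-IL stream would instead keep the label set fixed across tasks and vary the input distribution between them.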

    Dream Net: a privacy preserving continual learning model for face emotion recognition

    No full text
    Continual learning is a growing challenge of artificial intelligence. Among the algorithms developed in past years to alleviate catastrophic forgetting, only a few studies have focused on face emotion recognition. In parallel, the field of emotion recognition has raised the ethical issue of privacy preservation. This paper presents Dream Net, a privacy-preserving continual learning model for face emotion recognition. Using a pseudo-rehearsal approach, this model alleviates catastrophic forgetting by capturing the mapping function of a trained network without storing examples of the learned knowledge. We evaluated Dream Net on the FER-2013 database and obtained an average accuracy of 45% ± 2 at the end of incremental learning of all classes, compared to 16% ± 0 without any continual learning model.
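The pseudo-rehearsal idea can be sketched as follows (a minimal illustration, not the Dream Net architecture itself): random inputs are pushed through the frozen, previously trained network, and the resulting input/output pairs stand in for stored examples when training on new data.

```python
import numpy as np

def pseudo_rehearsal_pairs(frozen_forward, input_dim, n_samples, rng):
    """Capture a trained network's mapping without storing real examples:
    pair random inputs with the frozen network's own outputs."""
    xs = rng.standard_normal((n_samples, input_dim))
    ys = np.stack([frozen_forward(x) for x in xs])
    return xs, ys
```

These pseudo-pairs are interleaved with the new task's data, so the network rehearses its old mapping without keeping any privacy-sensitive face images.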

    AdvisiIL - A class-incremental learning advisor

    No full text
    Recent class-incremental learning methods combine deep neural architectures and learning algorithms to handle streaming data under memory and computational constraints. The performance of existing methods varies depending on the characteristics of the incremental process. To date, there is no other approach than testing all pairs of learning algorithms and neural architectures on the training data available at the start of the learning process in order to select a suitable algorithm-architecture combination. To tackle this problem, we introduce AdvisIL, a method that takes as input the main characteristics of the incremental process (memory budget for the deep model, initial number of classes, size of incremental steps) and recommends an adapted pair of learning algorithm and neural architecture. The recommendation is based on the similarity between the user-provided settings and a large set of pre-computed experiments. AdvisIL makes class-incremental learning easier, since users do not need to run cumbersome experiments to design their system. We evaluate our method on four datasets under six incremental settings and three deep model sizes, comparing six algorithms and three deep neural architectures. Results show that AdvisIL has better overall performance than any individual combination of a learning algorithm and a neural architecture.
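The recommendation step can be sketched as a nearest-neighbour lookup over pre-computed runs; the field names and the distance metric below are assumptions for illustration, not taken from the paper:

```python
import math

def recommend(user_settings, experiments):
    """Return the (algorithm, architecture) pair of the pre-computed
    experiment whose settings vector (memory budget, initial class count,
    incremental step size) is closest to the user-provided one."""
    best = min(experiments,
               key=lambda e: math.dist(user_settings, e["settings"]))
    return best["algorithm"], best["architecture"]
```

In practice the setting dimensions would be normalized before computing the distance, since memory budgets and class counts live on very different scales.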

    PCM compact model: Optimized methodology for model card extraction

    To achieve high yield on products embedding PCM memory, it is mandatory to provide designers with an accurately calibrated PCM compact model. To achieve this goal, a standardized model card extraction methodology must be developed. In this paper, we present a PCM model card extraction flow based on a minimal set of static and dynamic measurements. From these measurements, characteristics are first obtained and model card parameters are then extracted without any loop back, i.e. each parameter is extracted only once, on a given characteristic. After this extraction procedure, the model card values are validated through a comparison with an extra characteristic (the SET-Low characteristic) that was not used for the extraction.
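The "no loop back" flow can be sketched generically as an ordered, one-pass fit (the extractor and characteristic names below are hypothetical; the paper's actual parameters and measurements are not reproduced here):

```python
def extract_model_card(characteristics, extractors):
    """One-pass model-card extraction: each parameter is fitted exactly
    once, on its dedicated characteristic, in a fixed order, and earlier
    parameters are never revisited (no loop back). Later extractors may
    read, but not rewrite, already-extracted values in `card`."""
    card = {}
    for param_name, char_name, fit in extractors:
        card[param_name] = fit(characteristics[char_name], card)
    return card
```

A held-out characteristic (here, the SET-Low curve) is then compared against simulations using the finished card to validate the extraction.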

    Generalization of iterative sampling in autoencoders

    No full text
    Generative autoencoders are designed to model a target distribution with the aim of generating samples, and it has also been shown that specific non-generative autoencoders (i.e. contractive and denoising autoencoders) can be turned into generative models using reinjections (i.e. iterative sampling). In this work, we provide mathematical evidence that any autoencoder reproducing the input data with a loss of information can sample from the training distribution using reinjections. More precisely, we prove that the property of modeling a given distribution and sampling from it applies not only to contractive and denoising autoencoders but to all lossy autoencoders. In accordance with previous results, we emphasize that the reinjection sampling procedure improves the quality of sampling in autoencoders. We experimentally illustrate this property by generating synthetic data with non-generative autoencoders trained on standard datasets, and show that the learning curve of a classifier trained on synthetic data is similar to that of a classifier trained on the original data.
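The reinjection procedure itself is simple to state. Below is a toy sketch in which a contractive map stands in for a lossy autoencoder; real experiments would of course use a trained encoder/decoder:

```python
def reinjection_sample(autoencoder, x0, n_steps):
    """Iterative sampling: feed each reconstruction back as the next input.
    For a lossy autoencoder, iterating this map drifts the sample toward
    the regions the autoencoder has learned to reproduce."""
    x = x0
    for _ in range(n_steps):
        x = autoencoder(x)
    return x

# Toy lossy map with fixed point 2.0, standing in for a trained autoencoder.
toy_ae = lambda x: 0.5 * x + 1.0
sample = reinjection_sample(toy_ae, 10.0, 50)  # converges toward 2.0
```

Whatever the starting point, the iterates converge to the map's fixed point, mirroring how reinjection pulls arbitrary inputs toward the learned data distribution.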

    Phase-Change Memory: A Continuous Multilevel Compact Model of Subthreshold Conduction and Threshold Switching

    No full text
    A Phase-Change Memory (PCM) compact model of threshold switching, based on thermal runaway in Poole-Frenkel conduction, is proposed. Although this approach is often used in physical models, this is the first time it has been implemented in a compact model. The model's accuracy is validated through a good correlation between simulations and experimental data collected on a PCM cell embedded in a 90 nm technology. A wide range of intermediate states is measured and accurately modeled with a single set of parameters, allowing multilevel programming. Good convergence is exhibited even in snapback simulation thanks to this fully continuous approach. Moreover, threshold-property extractions indicate a thermally enhanced switching, which validates the underlying hypothesis of the model. Finally, we show that this model is compliant with a new drift-resilient cell-state metric. Once enriched with a phase-transition module, this compact model is ready to be implemented in circuit simulators.
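The thermal-runaway mechanism behind threshold switching can be sketched numerically: a Poole-Frenkel-like current heats the cell, the higher temperature increases the current, and above some voltage this feedback has no stable subthreshold solution. All parameter values below are illustrative placeholders, not the paper's calibrated model card:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def pf_current(v, t, i0=1e-9, ea=0.3, alpha=0.05):
    """Poole-Frenkel-like subthreshold current: an activation barrier `ea`
    (eV) lowered in proportion to sqrt(voltage). Illustrative parameters."""
    return i0 * v * math.exp(-(ea - alpha * math.sqrt(v)) / (K_B * t))

def self_heated_current(v, t_amb=300.0, r_th=1e12, n_iter=200):
    """Fixed-point iteration of the electro-thermal feedback
    T = T_amb + R_th * V * I(V, T). Returns the stable subthreshold
    current, or None when the feedback diverges (runaway -> switching)."""
    t = t_amb
    for _ in range(n_iter):
        t_new = t_amb + r_th * v * pf_current(v, t)
        if t_new > 2000.0:  # no stable solution: thermal runaway
            return None
        t = t_new
    return pf_current(v, t)
```

At low bias the self-heating is negligible and the iteration converges; at high bias the temperature estimate grows without bound, which is the compact-model signature of threshold switching.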